
    Constraints on the neutrino mass and mass hierarchy from cosmological observations

    Considering the mass splitting among the three active neutrinos, we present new constraints on the sum of neutrino masses $\sum m_\nu$ by updating the anisotropic analysis of the Baryon Acoustic Oscillation (BAO) scale in the CMASS and LOWZ galaxy samples from Data Release 12 of the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS DR12). Combining the BAO data of 6dFGS, MGS, LOWZ and CMASS with the Planck 2015 data on the temperature anisotropy and polarization of the Cosmic Microwave Background (CMB), we find 95% C.L. upper bounds of $\sum m_{\nu,\rm NH} < 0.18$ eV for the normal hierarchy (NH), $\sum m_{\nu,\rm IH} < 0.20$ eV for the inverted hierarchy (IH), and $\sum m_{\nu,\rm DH} < 0.15$ eV for the degenerate hierarchy (DH), and the normal hierarchy is slightly preferred over the inverted one ($\Delta\chi^2 \equiv \chi^2_{\rm NH} - \chi^2_{\rm IH} \simeq -3.4$). In addition, neither additional relativistic degrees of freedom nor massive sterile neutrinos are favored at present.
    Comment: 12 pages, 3 tables, 1 figure; refs added
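    For context on why the hierarchies carry different lower limits, the oscillation mass splittings already fix the minimum of $\sum m_\nu$ in each hierarchy. Below is a minimal sketch of that arithmetic, using representative splitting values (assumed here, not taken from the paper's fit):

```python
import numpy as np

# Representative oscillation splittings (assumed, roughly PDG-era values):
dm21_sq = 7.5e-5   # eV^2, solar splitting
dm31_sq = 2.5e-3   # eV^2, atmospheric splitting (magnitude)

# Normal hierarchy (m1 ~ 0): m2 and m3 follow from the splittings.
sum_nh = np.sqrt(dm21_sq) + np.sqrt(dm31_sq)

# Inverted hierarchy (m3 ~ 0): m1 and m2 both sit near the atmospheric scale.
sum_ih = np.sqrt(dm31_sq) + np.sqrt(dm31_sq + dm21_sq)

print(f"minimum sum m_nu, NH: {sum_nh:.3f} eV")   # ~0.059 eV
print(f"minimum sum m_nu, IH: {sum_ih:.3f} eV")   # ~0.101 eV
```

    Under these assumed values, the IH floor (~0.10 eV) sits much closer to the paper's 0.20 eV bound than the NH floor (~0.06 eV) does to its 0.18 eV bound, which is consistent with the mild preference for NH reported above.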

    Beamforming and Power Splitting Designs for AN-aided Secure Multi-user MIMO SWIPT Systems

    In this paper, an energy harvesting scheme for a multi-user multiple-input-multiple-output (MIMO) secrecy channel with artificial noise (AN) transmission is investigated. Joint optimization of the transmit beamforming matrix, the AN covariance matrix, and the power splitting ratio is conducted to minimize the transmit power under the target secrecy rate, the total transmit power, and the harvested energy constraints. The original problem is shown to be non-convex and is tackled by a two-layer decomposition approach. The inner-layer problem is solved through semidefinite relaxation, while the outer problem is shown to be a single-variable optimization that can be solved by a one-dimensional (1-D) line search. To reduce computational complexity, a sequential parametric convex approximation (SPCA) method is proposed to find a near-optimal solution. The work is then extended to the imperfect channel state information case with norm-bounded channel errors. Furthermore, the tightness of the relaxation for the proposed schemes is validated by showing that the optimal solution of the relaxed problem is rank-one. Simulation results demonstrate that the proposed SPCA method achieves the same performance as the 1-D search based scheme but with much lower complexity.
    Comment: 12 pages, 6 figures, submitted for possible publication
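    The two-layer structure described above can be sketched as follows. This is a simplified illustration, not the paper's exact formulation: the inner semidefinite relaxation below uses a single-receiver harvested-energy constraint and an SINR-style constraint as stand-ins for the multi-user secrecy-rate constraints, with the outer layer being the 1-D line search over the power-splitting ratio (CVXPY assumed available):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
Nt, Nr = 4, 2  # transmit / receive antenna counts (illustrative)
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

def inner_sdp(rho, e_min=0.1, snr_min=1.0, sigma2=1.0):
    """Inner layer: minimize transmit power for a fixed splitting ratio rho.
    Q is the relaxed beamforming covariance (SDR drops the rank(Q)=1 constraint)."""
    Q = cp.Variable((Nt, Nt), hermitian=True)
    recv = cp.real(cp.trace(H @ Q @ H.conj().T))   # power arriving at the receiver
    constraints = [
        Q >> 0,                                    # positive semidefinite
        (1 - rho) * recv >= e_min,                 # harvested-energy constraint
        rho * recv >= snr_min * sigma2,            # SINR stand-in for the secrecy constraint
    ]
    prob = cp.Problem(cp.Minimize(cp.real(cp.trace(Q))), constraints)
    prob.solve()
    return prob.value

# Outer layer: one-dimensional line search over the power-splitting ratio.
rhos = np.linspace(0.05, 0.95, 19)
best_power, best_rho = min((inner_sdp(r), r) for r in rhos)
print(f"best rho = {best_rho:.2f}, minimum transmit power = {best_power:.3f}")
```

    In the paper's actual problem the inner SDP also carries the AN covariance and per-user constraints; the rank-one tightness result quoted above is what justifies recovering a beamforming vector from the relaxed Q.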

    ExClaim: Explainable Neural Claim Verification Using Rationalization

    With the advent of deep learning, text generation language models have improved dramatically, producing text at a level comparable to human-written text. This can lead to rampant misinformation, because content can now be created cheaply and distributed quickly. Automated claim verification methods exist to validate claims, but they lack foundational data and often use mainstream news as evidence sources, which are strongly biased towards a specific agenda. Current claim verification methods use deep neural network models and complex algorithms to achieve high classification accuracy, but at the expense of model explainability. The models are black boxes, and the decision-making process and the steps taken to arrive at a final prediction are hidden from the user. We introduce a novel claim verification approach, ExClaim, that attempts to provide an explainable claim verification system with foundational evidence. Inspired by the legal system, ExClaim leverages rationalization to provide a verdict for the claim and justifies the verdict through a natural language explanation (rationale) that describes the model's decision-making process. ExClaim treats the verdict classification task as a question-answer problem and achieves an F1 score of 0.93. It also provides subtask explanations to justify the intermediate outcomes. Statistical and Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy outcomes. Ensuring claim verification systems are assured, rational, and explainable is an essential step toward improving human-AI trust and the accessibility of black-box systems.
    Comment: Published at 2022 IEEE 29th ST
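    To make the "verdict classification as question answering" framing concrete, here is a minimal stand-in that scores a claim against evidence with an off-the-shelf NLI model; this is not ExClaim's actual pipeline, weights, or rationale generator, just an illustration of mapping entailment labels onto claim verdicts:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

# Map NLI labels onto claim-verification verdicts (a common convention,
# not ExClaim's label set).
VERDICTS = {"ENTAILMENT": "SUPPORTED",
            "CONTRADICTION": "REFUTED",
            "NEUTRAL": "NOT ENOUGH INFO"}

def verdict(claim: str, evidence: str) -> str:
    # NLI convention: premise = evidence, hypothesis = claim.
    inputs = tok(evidence, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return VERDICTS[model.config.id2label[int(logits.argmax())]]

print(verdict("Water boils at 100 degrees Celsius at sea level.",
              "At standard atmospheric pressure, water boils at 100 C."))
```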

    Evaluating and Incentivizing Diverse Data Contributions in Collaborative Learning

    Full text link
    For a federated learning model to perform well, it is crucial to have a diverse and representative dataset. However, the data contributors may only be concerned with the performance on a specific subset of the population, which may not reflect the diversity of the wider population. This creates a tension between the principal (the FL platform designer), who cares about global performance, and the agents (the data collectors), who care about local performance. In this work, we formulate this tension as a game between the principal and multiple agents, and focus on the linear experiment design problem to formally study their interaction. We show that the statistical criterion used to quantify the diversity of the data, as well as the choice of federated learning algorithm, has a significant effect on the resulting equilibrium. We leverage this to design simple optimal federated learning mechanisms that encourage data collectors to contribute data representative of the global population, thereby maximizing global performance.
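    The role of the diversity criterion can be made concrete with a toy computation. Below is a minimal sketch (not the paper's model): each agent contributes covariates for a linear regression, and an A-optimality-style criterion, the trace of the inverse information matrix, scores how well the pooled data supports global estimation. The agent behaviors and the choice of A-optimality are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5  # dimension of the linear model (illustrative)

def agent_data(n, focus_dim=None):
    """Covariates from one agent: diverse by default, or concentrated
    on a single direction to mimic a locally-focused collector."""
    X = rng.standard_normal((n, d))
    if focus_dim is not None:
        other = [j for j in range(d) if j != focus_dim]
        X[:, other] *= 0.1
    return X

def a_criterion(Xs):
    """A-optimality-style score: trace of the inverse information matrix
    (lower means better global estimation from the pooled data)."""
    info = sum(X.T @ X for X in Xs)
    return np.trace(np.linalg.inv(info))

diverse = [agent_data(50) for _ in range(3)]
narrow = [agent_data(50, focus_dim=0) for _ in range(3)]
print("A-criterion, diverse contributions:", round(a_criterion(diverse), 3))
print("A-criterion, narrow contributions: ", round(a_criterion(narrow), 3))
```

    A principal whose reward rule tracks reductions in such a criterion gives each agent an incentive to contribute directions the pooled dataset still lacks, which is the flavor of mechanism the abstract describes.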